AMD GPUs
May we introduce: LUMI
This blog post gives a general introduction to the LUMI system, its software stack, and the programming models it will support. We will also try to answer the most burning questions about the system and how you can get your programs running on it. In later posts, we will do deeper dives into specific parts of the system and software, and into how to prepare and port your codes. We will also do use-case-specific deep dives, for instance on how to prepare your ML workloads for LUMI. As announced on October 21st, LUMI will be an HPE Cray EX supercomputer consisting of several partitions targeted at different use cases.
Train neural networks using AMD GPUs and Keras
AMD is developing a new HPC platform called ROCm. Its ambition is to create a common, open-source environment capable of interfacing with both Nvidia (via CUDA) and AMD GPUs (further information). This tutorial explains how to set up a neural-network environment using AMD GPUs in a single- or multi-GPU configuration. On the software side, we will run TensorFlow v1.12.0 as a backend to Keras on top of the ROCm kernel, using Docker. Installing and deploying ROCm requires particular hardware and software configurations.
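The Docker-based workflow described above can be sketched roughly as follows. This is a minimal example under stated assumptions: the image name `rocm/tensorflow` is AMD's public Docker Hub repository, but the specific tag shown is illustrative, and the device flags assume the ROCm kernel driver is already installed on the host.

```shell
# Pull a ROCm-enabled TensorFlow image.
# The tag below is illustrative; check the rocm/tensorflow repository
# on Docker Hub for a tag matching your desired TensorFlow version.
docker pull rocm/tensorflow:latest

# Start the container with access to the ROCm kernel driver devices.
# /dev/kfd is the ROCm compute interface; /dev/dri exposes the GPUs.
docker run -it \
  --device=/dev/kfd \
  --device=/dev/dri \
  --security-opt seccomp=unconfined \
  --group-add video \
  rocm/tensorflow:latest
```

Inside the container, Keras can then use the ROCm build of TensorFlow as its backend without any further GPU-specific configuration; the `--group-add video` flag is needed because GPU device nodes are typically owned by the `video` group.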